Results 1 - 20 of 848

1.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20245449

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic had a major impact on global health and was associated with millions of deaths worldwide. During the pandemic, imaging characteristics of chest X-ray (CXR) and chest computed tomography (CT) played an important role in the screening, diagnosis, and monitoring of disease progression. Various studies suggested that quantitative image analysis methods, including artificial intelligence and radiomics, can greatly boost the value of imaging in the management of COVID-19. However, few studies have explored the use of longitudinal multi-modal medical images with varying visit intervals for outcome prediction in COVID-19 patients. This study aims to explore the potential of longitudinal multimodal radiomics in predicting the outcome of COVID-19 patients by integrating both CXR and CT images with variable visit intervals through deep learning. 2274 patients who underwent CXR and/or CT scans during disease progression were selected for this study. Of these, 946 patients were treated at the University of Pennsylvania Health System (UPHS), and data for the remaining 1328 patients were acquired at Stony Brook University (SBU) and curated by the Medical Imaging and Data Resource Center (MIDRC). 532 radiomic features were extracted with the Cancer Imaging Phenomics Toolkit (CaPTk) from the lung regions in CXR and CT images at all visits. We employed two commonly used deep learning algorithms to analyze the longitudinal multimodal features and evaluated the prediction results based on the area under the receiver operating characteristic curve (AUC). Our models achieved testing AUC scores of 0.816 and 0.836, respectively, for the prediction of mortality. © 2023 SPIE.
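
The study feeds longitudinal radiomic features with variable visit intervals into recurrent deep learning models. The abstract does not give the preprocessing details; as a minimal sketch, one common way to handle irregular intervals is to encode the time gap since the previous visit as an extra feature channel and zero-pad each patient's sequence to a fixed length. The function name, padding scheme, and time-delta encoding below are illustrative assumptions, not the authors' implementation:

```python
def build_visit_sequence(visits, max_len, n_feats):
    """Encode one patient's visits as a fixed-length sequence.

    visits: chronological list of (days_since_first_visit, feature_vector).
    Each timestep is [days_since_previous_visit] + features, zero-padded
    to max_len so batches can be fed to an LSTM/GRU.
    """
    seq, prev = [], 0
    for day, feats in visits:
        seq.append([float(day - prev)] + [float(f) for f in feats])
        prev = day
    padding = [[0.0] * (n_feats + 1) for _ in range(max_len - len(seq))]
    return seq + padding
```

A recurrent model can then mask the padded timesteps, while the time-delta channel lets it account for uneven visit spacing.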

2.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, its prevention and treatment have gradually become a focus of public health, and patients are increasingly concerned about its symptoms. COVID-19 presents symptoms similar to the common cold and cannot be diagnosed from symptoms alone, so medical images of the lungs must be examined to determine whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms increases, more and more medical images of the lungs need to be read. At the same time, the number of physicians falls far short of patient demand, leaving patients unable to detect and understand their own conditions in time. To address this, we performed image augmentation and data cleaning, and designed a deep learning classification network for a dataset of COVID-19 lung medical images to make accurate classification judgments. Through a new fine-tuning method and hyperparameter tuning that we designed, the network achieves 95.76% classification accuracy on this task, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.

3.
2022 IEEE Information Technologies and Smart Industrial Systems, ITSIS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20245166

ABSTRACT

The World Health Organization has labeled the novel coronavirus illness (COVID-19) a pandemic since March 2020. It is a new viral infection with a respiratory tropism that can lead to atypical pneumonia. Thus, according to experts, early detection of positive cases among people infected with the COVID-19 virus is highly needed. In this manner, patients will be segregated from other individuals, and the infection will not spread. As a result, developing early detection and diagnosis procedures to enable a speedy treatment process and stop the transmission of the virus has become a focus of research. Alternative early-screening approaches have become necessary due to the time-consuming nature of current testing methodology such as the reverse transcription polymerase chain reaction (RT-PCR) test. This work thoroughly reviews methods for detecting COVID-19 using deep learning (DL) algorithms on the sound modality, which has become an active research area in recent years. Although the majority of newly proposed methods are based on medical images (i.e. X-ray and CT scans), we show in this comprehensive survey that the sound modality can be a good alternative, providing a faster and easier way to create databases and achieve high performance. We also present the most popular sound databases proposed for COVID-19 detection. © 2022 IEEE.

4.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12467, 2023.
Article in English | Scopus | ID: covidwho-20244646

ABSTRACT

It is important to evaluate medical imaging artificial intelligence (AI) models for possible implicit discrimination (ability to distinguish between subgroups not related to the specific clinical task of the AI model) and disparate impact (difference in outcome rate between subgroups). We studied potential implicit discrimination and disparate impact of a published deep learning/AI model for the prediction of ICU admission for COVID-19 within 24 hours of imaging. The IRB-approved, HIPAA-compliant dataset contained 8,357 chest radiography exams from February 2020-January 2022 (12% ICU admission within 24 hours) and was separated by patient into training, validation, and test sets (64%, 16%, 20% split). The AI output was evaluated in two demographic categories: sex assigned at birth (subgroups male and female) and self-reported race (subgroups Black/African-American and White). We failed to show statistical evidence that the model could implicitly discriminate between members of subgroups categorized by race based on prediction scores (area under the receiver operating characteristic curve, AUC: median [95% confidence interval, CI]: 0.53 [0.48, 0.57]), but there was some marginal evidence of implicit discrimination between members of subgroups categorized by sex (AUC: 0.54 [0.51, 0.57]). No statistical evidence for disparate impact (DI) was observed between the race subgroups (i.e. the 95% CI of the ratio of the favorable outcome rate between two subgroups included one) for the example operating point of the maximized Youden index, but some evidence of disparate impact to the male subgroup based on sex was observed. These results help develop evaluation of implicit discrimination and disparate impact of AI models in the context of decision thresholds. © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
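
The two quantities studied here have simple operational definitions: implicit discrimination is measured as the AUC for separating two subgroups using the model's prediction scores (0.5 means the scores carry no subgroup information), and disparate impact as the ratio of favorable-outcome rates between subgroups at a chosen operating point. A pure-Python sketch follows; the threshold convention and the assumption that "favorable" means scoring below the ICU-admission threshold are illustrative, not taken from the paper:

```python
def subgroup_auc(scores_a, scores_b):
    # Probability that a random score from group A exceeds one from
    # group B (ties count 0.5); 0.5 indicates no separation.
    wins = sum((a > b) + 0.5 * (a == b) for a in scores_a for b in scores_b)
    return wins / (len(scores_a) * len(scores_b))

def disparate_impact(scores_a, scores_b, threshold):
    # Ratio of favorable-outcome rates between subgroups; here
    # "favorable" is assumed to mean scoring below the threshold
    # (i.e. not flagged for ICU admission).
    rate = lambda s: sum(x < threshold for x in s) / len(s)
    return rate(scores_a) / rate(scores_b)
```

A DI ratio whose confidence interval includes 1 (as the paper reports for race subgroups) indicates no evidence of disparate impact at that operating point.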

5.
Proceedings of SPIE - The International Society for Optical Engineering ; 12567, 2023.
Article in English | Scopus | ID: covidwho-20244192

ABSTRACT

The COVID-19 pandemic has challenged many of the healthcare systems around the world. Many patients who have been hospitalized due to this disease develop lung damage. In low- and middle-income countries, people living in rural and remote areas have very limited access to adequate health care. Ultrasound is a safe, portable and accessible alternative; however, it has limitations such as being operator-dependent and requiring a trained professional. The use of lung ultrasound volume sweep imaging is a potential solution for this lack of physicians. In order to support this protocol, image processing together with machine learning is a potential methodology for an automatic lung damage screening system. In this paper we present automatic detection of lung ultrasound artifacts using a Deep Neural Network, identifying clinically relevant artifacts such as pleural and A-lines contained in ultrasound examinations taken as part of clinical screening in patients with suspected lung damage. The model achieved encouraging preliminary results, with sensitivity of 94%, specificity of 81%, and accuracy of 89% in identifying the presence of A-lines. Finally, the present study could result in an alternative solution for operator-independent lung damage screening in rural areas, leading to the integration of AI-based technology as a complementary tool for healthcare professionals. © 2023 SPIE.

6.
IEEE Transactions on Radiation and Plasma Medical Sciences ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20244069

ABSTRACT

Automatic lung infection segmentation in computed tomography (CT) scans can offer great assistance in radiological diagnosis by improving accuracy and reducing the time required for diagnosis. The biggest challenges for deep learning (DL) models in segmenting infection regions are the high variance in infection characteristics, fuzzy boundaries between infected and normal tissues, and the difficulty of obtaining large amounts of annotated data for training. To resolve these issues, we propose a Modified U-Net (Mod-UNet) model with minor architectural changes and significant modifications to the training process of the vanilla 2D UNet. As part of these modifications, we updated the loss function, optimization function, and regularization methods, added a learning rate scheduler, and applied advanced data augmentation techniques. Segmentation results on two Covid-19 lung CT segmentation datasets show that the performance of Mod-UNet is considerably better than the baseline U-Net. Furthermore, to mitigate the issue of lack of annotated data, the Mod-UNet is used in a semi-supervised framework (Semi-Mod-UNet) which works on a random sampling approach to progressively enlarge the training dataset from a large pool of unannotated CT slices. Exhaustive experiments on the two Covid-19 CT segmentation datasets and on a real lung CT volume show that the Mod-UNet and Semi-Mod-UNet significantly outperform other state-of-the-art approaches in automated lung infection segmentation. © IEEE.
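
The Semi-Mod-UNet framework progressively enlarges its training set by randomly sampling unannotated CT slices and pseudo-labeling them with the current model. The abstract does not spell out the loop, so the sketch below is schematic: the function name, the per-round batch size `k`, and the omission of retraining between rounds are all simplifying assumptions.

```python
import random

def grow_training_pool(labeled, unlabeled, predict, rounds, k, seed=0):
    """Each round: draw k random unannotated slices, pseudo-label them
    with the current model's `predict`, and move them into the labeled
    pool. (In the real framework the model would be retrained between
    rounds so later pseudo-labels improve.)"""
    rng = random.Random(seed)
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        for _ in range(min(k, len(unlabeled))):
            slice_ = unlabeled.pop(rng.randrange(len(unlabeled)))
            labeled.append((slice_, predict(slice_)))
    return labeled, unlabeled
```

Random sampling keeps the enlarged pool unbiased with respect to the unannotated distribution, which is the property the paper's approach relies on.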

7.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20243842

ABSTRACT

This paper introduces an improved method for COVID-19 classification based on computed tomography (CT) volumes using a combination of a complex-architecture convolutional neural network (CNN) and orthogonal ensemble networks (OEN). The novel coronavirus disease reported in 2019 (COVID-19) is still spreading worldwide. Early and accurate diagnosis of COVID-19 is required in such a situation, and the CT scan is an essential examination. Various computer-aided diagnosis (CAD) methods have been developed to assist and accelerate doctors' diagnoses. Although one effective method is ensemble learning, existing methods combine major models that do not specialize in COVID-19. In this study, we attempted to improve the performance of a CNN for COVID-19 classification based on chest CT volumes. The CNN model specializes in feature extraction from anisotropic chest CT volumes. We adopt the OEN, an ensemble learning method considering inter-model diversity, to boost its feature extraction ability. For the experiment, we used chest CT volumes of 1283 cases acquired in multiple medical institutions in Japan. The classification result on 257 test cases indicated that the combination could improve the classification performance. © 2023 SPIE.

8.
International IEEE/EMBS Conference on Neural Engineering, NER ; 2023-April, 2023.
Article in English | Scopus | ID: covidwho-20243641

ABSTRACT

This study proposes a graph convolutional neural network (GCN) architecture for fusion of radiological imaging and non-imaging tabular electronic health records (EHR) for the purpose of clinical event prediction. We focused on a cohort of hospitalized patients with a positive RT-PCR test for COVID-19 and developed GCN-based models to predict three dependent clinical events (discharge from hospital, admission into ICU, and mortality) using demographics, billing codes for procedures and diagnoses, and chest X-rays. We hypothesized that the two-fold learning opportunity provided by the GCN is ideal for fusion of imaging information and tabular data as node and edge features, respectively. Our experiments indicate the validity of our hypothesis, where GCN-based predictive models outperform single-modality and traditional fusion models. We compared the proposed models against two variations of imaging-based models, including a DenseNet-121 architecture with learnable classification layers and Random Forest classifiers using a disease severity score estimated by a pre-trained convolutional neural network. The GCN-based model outperforms both imaging-only methods. We also validated our models on an external dataset where the GCN showed valuable generalization capabilities. We noticed that the edge-formation function can be adapted even after training the GCN model without limiting the application scope of the model. Our models take advantage of this fact for generalization to external data. © 2023 IEEE.

9.
Proceedings of SPIE - The International Society for Optical Engineering ; 12587, 2023.
Article in English | Scopus | ID: covidwho-20243426

ABSTRACT

With the outbreak of COVID-19 in 2020, timely and effective diagnosis and treatment of each COVID-19 patient is particularly important. This paper combines the advantages of deep learning in image recognition, uses ResNet as the basic network framework, and on this basis experiments with improvements to the residual structure. Tested on an open-source COVID-19 chest radiograph dataset, the method achieves an accuracy of 82.3%. A series of experiments shows that the trained model has good generalization, high accuracy, and fast convergence. This paper demonstrates the feasibility of the improved residual neural network in the diagnosis of COVID-19. © 2023 SPIE.

10.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12469, 2023.
Article in English | Scopus | ID: covidwho-20242921

ABSTRACT

The Medical Imaging and Data Resource Center (MIDRC) has been built to support AI-based research in response to the COVID-19 pandemic. One of the main goals of MIDRC is to make data collected in the repository ready for AI analysis. Due to data heterogeneity, there is a need to standardize data and make data mining easier. Our study aims to stratify imaging data according to underlying anatomy using open-source image processing tools. The experiments were performed using Google Colaboratory on computed tomography (CT) imaging data available from the MIDRC. We adopted existing open-source tools to process CT series (N=389) to define image sub-volumes according to body part classification, and additionally identified series slices containing specific anatomic landmarks. Cases with automatically identified chest regions (N=369) were then processed to automatically segment the lungs. In order to assess the accuracy of segmentation, we performed outlier analysis using 3D shape radiomics features extracted from the left and right lungs. Standardized DICOM objects were created to store the resulting segmentations, regions, landmarks and radiomics features. We demonstrated that the MIDRC chest CT collections can be enriched using open-source analysis tools and that data available in MIDRC can be further used to evaluate the robustness of publicly available tools. © 2023 SPIE.

11.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20242839

ABSTRACT

The COVID-19 pandemic has made a dramatic impact on human life, medical systems, and financial resources. Due to the disease's pervasive nature, many different and interdisciplinary fields of research pivoted to study the disease. For example, deep learning (DL) techniques were employed early to assess patient diagnosis and prognosis from chest radiographs (CXRs) and computed tomography (CT) scans. While the use of artificial intelligence (AI) in the medical sector has displayed promising results, DL may suffer from a lack of reproducibility and generalizability. In this study, the robustness of a pre-trained DL model utilizing the DenseNet-121 architecture was evaluated by using a larger collection of CXRs from the same institution that provided the original model with its test and training datasets. The current test set contained a larger span of dates, incorporated different strains of the virus, and included different immunization statuses. Considering differences in these factors, model performance between the original and current test sets was evaluated using area under the receiver operating characteristic curve (ROC AUC) [95% CI]. Statistical comparisons were performed using the DeLong, Kolmogorov-Smirnov, and Wilcoxon rank-sum tests. Uniform manifold approximation and projection (UMAP) was used to help visualize whether underlying causes were responsible for differences in performance between test sets. In the task of classifying between COVID-positive and COVID-negative patients, the DL model achieved an AUC of 0.67 [0.65, 0.70], compared with the original performance of 0.76 [0.73, 0.79]. The results of this study suggest that underlying biases or overfitting may hinder performance when generalizing the model. © 2023 SPIE.

12.
Proceedings of the 10th International Conference on Signal Processing and Integrated Networks, SPIN 2023 ; : 590-596, 2023.
Article in English | Scopus | ID: covidwho-20242821

ABSTRACT

The successful elimination of the SARS-Cov2 virus has eluded society and the medical fraternity to date. Months have passed, but the virus is still very much present amongst us, though its severity and contagiousness have decreased. The pandemic, first detected in Wuhan, China in late 2019, has had colossal ramifications for the societal, financial and physical well-being of humankind. Timely detection and isolation of infected persons is the only way to contain this contagion. One of the biggest hurdles in accurately detecting Covid-19 is its similarity to other thoracic ailments such as lung cancer, bacterial and viral pneumonia, tuberculosis and others. Differential observation is challenging due to identical radioscopic findings such as GGOs, crazy-paving patterns and their combinations. Thorax imaging such as X-rays (CXR) has proven to be an efficient and economical diagnostic for detecting Covid-19 pneumonia. The proposed work aims at utilising three CNN models, namely Inception-V3, DenseNet169 and VGG16, along with feature concatenation and an ensemble technique to correctly predict Covid-19 pneumonia from chest X-rays of patients. The Covid-19 Radiography dataset, having a total of 4839 CXR images, has been employed to evaluate the proposed model, and accuracy, precision, recall and F1-score of 97.74%, 97.78%, 97.73% and 97.75% have been obtained. The proposed system can assist medical professionals in distinguishing Covid-19 from a host of other pulmonary diseases with high probability. © 2023 IEEE.

13.
ACM International Conference Proceeding Series ; : 12-21, 2022.
Article in English | Scopus | ID: covidwho-20242817

ABSTRACT

The global COVID-19 pandemic has caused a worldwide health crisis. Automated diagnostic methods can control the spread of the pandemic and assist physicians in tackling high-workload conditions through the quick treatment of affected patients. Owing to the scarcity of medical images drawn from different sources, image heterogeneity raises challenges for training networks effectively and learning robust features. We propose a multi-joint unit network for the diagnosis of COVID-19 using the joint unit module, which leverages receptive fields from multiple resolutions for learning rich representations. Existing approaches usually employ a large number of layers to learn the features, which consequently requires more computational power and increases network complexity. To compensate, our joint unit module extracts low-, same-, and high-resolution feature maps simultaneously using different phases. Later, these learned feature maps are fused and utilized by the classification layers. We observed that our model helps to learn sufficient information for classification without a performance loss and with faster convergence. We used three public benchmark datasets to demonstrate the performance of our network. Our proposed network consistently outperforms existing state-of-the-art approaches by demonstrating better accuracy, sensitivity, specificity, and F1-score across all datasets. © 2022 ACM.

14.
Computer Engineering and Applications Journal ; 12(2):71-78, 2023.
Article in English | ProQuest Central | ID: covidwho-20242189

ABSTRACT

COVID-19 is an infectious disease that causes acute respiratory distress syndrome due to the SARS-CoV-2 virus. Rapid and accurate screening and early diagnosis of patients play an essential role in controlling outbreaks and reducing the spread of this disease. This disease can be diagnosed by manually reading CXR images, but this is time-consuming and prone to errors. For this reason, this research proposes an automatic medical image segmentation system using a combination of the U-Net architecture with Batch Normalization to obtain more accurate and fast results. The method used in this study consists of pre-processing using the CLAHE method and morphological opening, CXR image segmentation using a combination of the U-Net 4-Convolution-Block architecture with Batch Normalization, then evaluation using performance measures such as accuracy, sensitivity, specificity, F1-score, and IoU. The results showed that the U-Net architecture modified with Batch Normalization successfully segmented CXR images, as seen from all performance measures exceeding 94%.
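
The evaluation metrics named in this abstract all have direct pixel-wise definitions in terms of true/false positives and negatives. As a small illustrative sketch (operating on flat 0/1 masks; not the paper's code):

```python
def segmentation_metrics(pred, truth):
    # pred, truth: equal-length flat lists of 0/1 pixel labels.
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return {
        "accuracy": (tp + tn) / len(pred),      # fraction of pixels correct
        "sensitivity": tp / (tp + fn),          # recall of lung pixels
        "specificity": tn / (tn + fp),          # recall of background
        "f1": 2 * tp / (2 * tp + fp + fn),      # a.k.a. Dice coefficient
        "iou": tp / (tp + fp + fn),             # Jaccard index
    }
```

Note that for segmentation, IoU and F1 (Dice) penalize boundary errors far more than plain pixel accuracy, which is dominated by the background class.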

15.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12470, 2023.
Article in English | Scopus | ID: covidwho-20241885

ABSTRACT

Stroke is a leading cause of morbidity and mortality throughout the world. Three-dimensional ultrasound (3DUS) imaging was shown to be more sensitive to treatment effect and more accurate in stratifying stroke risk than two-dimensional ultrasound (2DUS) imaging. Point-of-care ultrasound screening (POCUS) is important for patients with limited mobility and at times when patients have limited access to the ultrasound scanning room, such as in the COVID-19 era. We used an optical tracking system to track the 3D position and orientation of the 2DUS frames acquired by a commercial wireless ultrasound system and subsequently reconstructed a 3DUS image from these frames. The tracking requires spatial and temporal calibrations. Spatial calibration is required to determine the spatial relationship between the 2DUS machine and the tracking system. Spatial calibration was achieved by localizing landmarks with known coordinates in a custom-designed Z-fiducial phantom in a 2DUS image. Temporal calibration is needed to synchronize the clocks of the wireless ultrasound system and the optical tracking system so that position and orientation detected by the optical tracking system can be registered to the corresponding 2DUS frame. Temporal calibration was achieved by initiating the scanning with an abrupt motion that can be readily detected in both systems. This abrupt motion establishes a common reference time point, thereby synchronizing the clocks in both systems. We demonstrated that the system can be used to visualize the three-dimensional structure of a carotid phantom. The error rate of the measurements is 2.3%. Upon in-vivo validation, this system will allow POCUS carotid scanning in clinical research and practice. © 2023 SPIE.
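
The temporal calibration step synchronizes the two clocks by detecting the same abrupt motion in both data streams. One generic way to recover such an offset is to pick the lag that maximizes the cross-correlation of the two motion signals; the authors describe detecting the motion onset directly, so the correlation approach below is an illustrative stand-in, not their method:

```python
def best_lag(sig_a, sig_b, max_lag):
    """Return the integer lag (in samples) that best aligns sig_b with
    sig_a, i.e. sig_a[i] corresponds to sig_b[i - lag]."""
    def score(lag):
        # Cross-correlation of the two signals at this shift.
        pairs = [(sig_a[i], sig_b[i - lag])
                 for i in range(len(sig_a)) if 0 <= i - lag < len(sig_b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=score)
```

Once the lag is known, every tracked pose can be shifted onto the ultrasound frame timeline before 3D reconstruction.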

16.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 1671-1675, 2023.
Article in English | Scopus | ID: covidwho-20241041

ABSTRACT

A respiratory disease known as pneumonia can be devastating if it is not identified and treated in a timely manner. For successful treatment and better patient outcomes, pneumonia must be identified early and properly classified. Deep learning has recently demonstrated considerable promise in the area of medical imaging and has been successfully applied to several image-based diagnosis tasks, including the identification and classification of pneumonia. Pneumonia is a respiratory illness that produces pleural effusion (a condition in which fluid floods the lungs). COVID-19 is becoming the major cause of the global rise in pneumonia cases. Early detection of this disease enables curative therapy and increases the likelihood of survival. CXR (chest X-ray) imaging is a common method of detecting and diagnosing pneumonia, but examining chest X-rays is a difficult undertaking that often results in variance and inaccuracy. In this study, we created an automatic pneumonia diagnosis method, also known as a CAD (Computer-Aided Diagnosis) system, which may significantly reduce the time and cost of collecting CXR imaging data. This paper uses deep learning, which has the potential to revolutionize the area of medical imaging and has shown promising results in the detection and classification of pneumonia. Such models can provide fast and accurate results, with high sensitivity and specificity in identifying pneumonia in chest X-rays. Further research and development in this area are needed to improve the accuracy and reliability of these models and make them more accessible to healthcare providers. © 2023 IEEE.

17.
2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20240818

ABSTRACT

This study compared five different image classification algorithms, namely VGG16, VGG19, AlexNet, DenseNet, and ConVNext, based on their ability to detect and classify COVID-19-related cases given chest X-ray images. Performance metrics such as accuracy, F1 score, precision, recall, and MCC were used to compare these classification algorithms. Upon testing, the accuracy of each model was unsatisfactory for medical application, ranging from 80.00% to 92.50%. As such, an ensemble learning-based image classification model made up of AlexNet and VGG19, called CovidXNet, was proposed to detect COVID-19 through chest X-ray images, discriminating between healthy and pneumonic lung images. CovidXNet achieved an accuracy of 97.00%, significantly better than the previous results. Further studies may be conducted to increase the accuracy, particularly for identifying and classifying chest radiographs for COVID-19-related cases, since the current model may still produce false negatives, which may be detrimental to the prevention of the spread of the virus. © 2022 IEEE.
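
Ensembling two CNNs as CovidXNet does can be realized in several ways; the simplest is soft voting, which averages the per-class probabilities of the member models and picks the argmax. The sketch below shows this generic mechanism only, and is not the paper's CovidXNet implementation:

```python
def soft_vote(model_probs):
    """model_probs: one probability vector per member model, e.g.
    [[p_covid, p_normal], ...]. Returns (predicted class index,
    averaged probability vector)."""
    n_models, n_classes = len(model_probs), len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models
           for c in range(n_classes)]
    return avg.index(max(avg)), avg
```

Soft voting tends to beat any single member when the members make uncorrelated errors, which is why combining architecturally different networks (here AlexNet and VGG19) is a common choice.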

18.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20240716

ABSTRACT

This paper proposes an automated classification method for COVID-19 chest CT volumes using an improved 3D MLP-Mixer. Novel coronavirus disease 2019 (COVID-19) has spread across the world, causing a large number of infected patients and deaths. The sudden increase in the number of COVID-19 patients causes a manpower shortage in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results. A CAD system for COVID-19 enables an efficient diagnosis workflow and helps reduce this manpower shortage. In image-based diagnosis of viral pneumonia cases including COVID-19, both local and global image features are important because viral pneumonia causes many ground glass opacities and consolidations across large areas of the lung. This paper proposes an automated classification method of chest CT volumes for COVID-19 diagnosis assistance. MLP-Mixer is a recent image classification method using a Vision Transformer-like architecture that performs classification using both local and global image features. To classify 3D CT volumes, we developed a hybrid classification model that consists of both a 3D convolutional neural network (CNN) and a 3D version of the MLP-Mixer. The classification accuracy of the proposed method was evaluated using a dataset containing 1205 CT volumes, achieving a classification accuracy of 79.5%. The accuracy was higher than that of conventional 3D CNN models consisting of 3D CNN layers and simple MLP layers. © 2023 SPIE.

19.
IISE Transactions on Healthcare Systems Engineering ; 13(2):132-149, 2023.
Article in English | ProQuest Central | ID: covidwho-20239071

ABSTRACT

The global extent of COVID-19 mutations and the consequent depletion of hospital resources highlighted the necessity of effective computer-assisted medical diagnosis. COVID-19 detection mediated by deep learning models can help diagnose this highly contagious disease and lower infectivity and mortality rates. Computed tomography (CT) is the preferred imaging modality for building automatic COVID-19 screening and diagnosis models. It is well-known that the training set size significantly impacts the performance and generalization of deep learning models. However, accessing a large dataset of CT scan images from an emerging disease like COVID-19 is challenging. Therefore, data efficiency becomes a significant factor in choosing a learning model. To this end, we present a multi-task learning approach, namely, a mask-guided attention (MGA) classifier, to improve the generalization and data efficiency of COVID-19 classification on lung CT scan images. The novelty of this method is compensating for the scarcity of data by employing more supervision with lesion masks, increasing the sensitivity of the model to COVID-19 manifestations, and helping both generalization and classification performance. Our proposed model achieves better overall performance than the single-task (without MGA module) baseline and state-of-the-art models, as measured by various popular metrics.

20.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12464, 2023.
Article in English | Scopus | ID: covidwho-20239014

ABSTRACT

Deep neural networks (DNNs) are vulnerable to adversarial noise. Adversarial training is a general strategy to improve DNN robustness, but training a DNN model with adversarial noise may result in much lower accuracy on clean data, which is termed the trade-off between accuracy and adversarial robustness. Towards lifting this trade-off, we propose an adversarial training method that generates optimal adversarial training samples. We evaluate our method on PathMNIST and COVID-19 CT image classification tasks, where the DNN model is ResNet-18, and on Heart MRI and Prostate MRI image segmentation tasks, where the DNN model is nnUnet. All four datasets are publicly available. The experiment results show that our method has the best robustness against adversarial noise and the least accuracy degradation compared to the other defense methods. © 2023 SPIE.
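
Adversarial training needs a way to generate adversarial samples during training; the classic building block is the fast gradient sign method (FGSM), which perturbs each input component by ±ε in the direction that increases the loss. This paper proposes an optimized sample generator rather than FGSM, so the toy linear-classifier sketch below only illustrates the underlying perturbation idea (the linear score and loss are assumptions made to keep the gradient analytic):

```python
def _sign(v):
    # Sign function returning -1, 0, or +1.
    return (v > 0) - (v < 0)

def fgsm_linear(x, w, y, eps):
    """One FGSM step against a linear score f(x) = sum(w_i * x_i).
    With loss = -y * f(x) (label y in {-1, +1}), the input gradient
    is -y * w, so each component moves eps in direction sign(-y*w_i),
    pushing the score away from the true label."""
    return [xi + eps * _sign(-y * wi) for xi, wi in zip(x, w)]
```

A robust training loop would generate such perturbed samples from the current model at each step and train on them alongside (or instead of) the clean inputs.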
